
    New Results on the Probabilistic Analysis of Online Bin Packing and its Variants

    The classical bin packing problem can be stated as follows: we are given a multiset of items {a1, ..., an} with sizes in [0,1] and want to pack them into a minimum number of bins, each of capacity one. The problem has several applications, for example in logistics: we can interpret the i-th item as the time a package deliverer spends on the i-th tour. Package deliverers have a restricted daily working time, and we want to assign the tours so that the number of deliverers needed is minimized. Another setting is to think of the items as boxes with a standardized base but variable height; the goal is then to pack these boxes into a container that is standardized in all three dimensions. Moreover, variants of the classical bin packing problem arise in cloud computing, when virtual machines have to be stored on servers.

    Besides its practical relevance, bin packing is one of the fundamental problems of theoretical computer science: it was proven many years ago that, under standard complexity assumptions, the value of an optimal packing of the items cannot be computed efficiently; classical bin packing is NP-complete. Computing the value efficiently means that the runtime of the algorithm is bounded polynomially in the number of items to be packed. Besides the offline version, where all items are known at the beginning, the online version is also of interest: here, the items are revealed one by one and must be packed into a bin immediately and irrevocably, without knowing which and how many items will still arrive. This version, too, is of practical relevance. In many situations the whole input is not known in advance: for example, the requirements of future virtual machines that have to be stored are unknown, or additional packages suddenly have to be delivered after some deliverers have already started their tours.

    The classical theoretical analysis of an online algorithm A can be thought of as follows: an adversary studies the behavior of the algorithm and afterwards constructs a sequence of items I. The performance is then measured by the number of bins A uses on I, divided by the value of an optimal packing of the items in I. Since the adversary tries to choose a worst-case sequence, this way of measuring performance is very pessimistic. Moreover, the chosen sequences I often turn out to be artificial: for example, in many cases the item sizes increase monotonically over time. Instances in practice are often subject to random influence, and it is therefore likely that they are well-behaved.

    In this thesis we analyze the performance of online algorithms with respect to two stochastic models. In the first model, the adversary chooses a set of items SI and a distribution F on SI; the items are then drawn independently and identically distributed according to F. In the second model, the adversary chooses a finite set of items SI, and these items arrive in random order, that is, according to the uniform distribution on the set of all permutations of the items. It can be shown that the adversary in the second model is at least as powerful as in the first.

    The results of this thesis fall into three parts. In the first part we consider the complexity of classical bin packing and its variants cardinality-constrained and class-constrained bin packing in both stochastic models: we determine whether it is possible to construct algorithms that are, in expectation, nearly optimal on large instances generated according to the stochastic models, or whether non-trivial lower bounds exist. Among other things, we show that the complexity of class-constrained bin packing differs between the two models. In the second part we deal with bounded-space bin packing and the dual maximization variant, bin covering. We show that classical worst-case bounds can be overcome in both models; in other words, bounded-space algorithms benefit from randomized instances compared to the worst case. Finally, we consider selected heuristics for class-constrained bin packing and the corresponding maximization variant, class-constrained bin covering. Here we observe that the different complexity of class-constrained bin packing in the two stochastic models, established in the first part, is not only a theoretical phenomenon but also shows up for many common algorithmic approaches. Interestingly, when the same algorithmic ideas are applied to class-constrained bin covering, both types of randomization help to a similar extent.
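    To make the performance measure concrete, the following minimal Python sketch simulates the first stochastic model with the simple bounded-space heuristic Next-Fit. It is an illustration, not code from the thesis; the choice F = Uniform(0, 1) and the use of the total item size as a lower bound on the optimum are assumptions made here for demonstration.

```python
import random

def next_fit(items):
    """Pack items online with Next-Fit: keep a single open bin and
    close it as soon as the next item does not fit."""
    bins, space = 0, 0.0
    for a in items:
        if a > space:          # item does not fit: open a new bin
            bins += 1
            space = 1.0
        space -= a
    return bins

# First stochastic model: items drawn i.i.d. from F = Uniform(0, 1).
random.seed(0)
items = [random.random() for _ in range(100_000)]
used = next_fit(items)
opt_lb = sum(items)            # any packing needs at least total-size many bins
print(used / opt_lb)           # empirical ratio; roughly 4/3 for Next-Fit on U(0,1)
```

    Since the total item size only lower-bounds the optimum, the printed quotient slightly overestimates the true performance ratio; for U(0, 1) the optimum is close to the total size, so the estimate is tight in this regime.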

    Design Concept for High Speed Railway Bridges in Regions with High Seismic Activity and Soft Soil

    At present, public transport over medium distances is increasingly covered by modern high speed railway links, with many examples all over the world already under construction or in the design process. In case of soft soil conditions or urban areas, many of these projects are realized as elevated railway structures utilizing regular viaducts. In seismically active regions, earthquake design largely governs the overall design concept of the bridges, particularly of the piers and foundations. Thus, on the one hand, the structure has to exhibit enough stiffness under serviceability conditions (limitation of rail stresses; safety requirements for high speed trains), while on the other hand, flexibility, energy dissipation and ductility are advantageous with respect to a severe earthquake event. Since the earthquake excitation normally governs the whole structural design in case of high seismicity, it is necessary to consider the dynamic loading as early as possible in order to arrive at an optimal solution. Modern and efficient numerical simulations, e.g. using the Finite Element Method, enable design engineers to study bridge structures under static and dynamic loading conditions, taking geometrically and physically nonlinear effects into account. The following paper describes the design process of a high speed train structure planned in Taiwan. All results presented herein were gained during the tender phase of the project, which was finalized at the beginning of the year 2000. The bridge structure has been designed as simply supported viaducts consisting of 35 m and 30 m spans. Each span is supported by four reinforced concrete columns at each end, which are in turn supported by 4 piles (see Fig. 1). The bridge spans have been designed as prestressed box girders, which will be prefabricated and installed at the construction site. Two earthquake levels have been considered: moderate intensity for the serviceability check (elastic response of the structure) and a severe earthquake for the ultimate load limit check (elasto-plastic response). The evaluation of the dynamic response has been performed with plane and spatial beam models applying linear and non-linear time history analysis. More practical methods based on linear superposition principles, such as equivalent static loads or the multiple mode response spectrum method, have been used for comparison. Time history analyses have been performed utilizing synthetic acceleration time histories (one- and multidimensional) compatible with the governing response spectra. The non-linear elasto-plastic analyses for the severe earthquake situation have been carried out using a single pier model. The advantages of performing a fully physically nonlinear analysis for the design will be demonstrated. In addition, displacements and rail stresses, as well as the influence of the wave-passage effect, have been studied by means of an overall dynamic multi-span model taking into account non-linear rail-track bond behavior.
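    As a hedged illustration of the linear time-history step mentioned above, the sketch below integrates a single-degree-of-freedom pier model under a ground-acceleration record with the average-acceleration Newmark scheme, in the standard incremental formulation. All numerical values, names, and the white-noise stand-in for a spectrum-compatible synthetic record are assumptions for demonstration only, not the project's actual models or parameters.

```python
import numpy as np

def newmark_linear(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Incremental Newmark (average acceleration) integration of
    m*u'' + c*u' + k*u = -m*ag(t) for a linear SDOF pier model."""
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * np.asarray(ag)                    # effective earthquake force
    a[0] = p[0] / m                            # at-rest initial conditions
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        dp = (p[i + 1] - p[i]
              + (m / (beta * dt) + gamma / beta * c) * v[i]
              + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a[i])
        du = dp / keff
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1 - gamma / (2 * beta)) * a[i])
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Illustrative run: a pier with a 0.5 s period and 5 % damping under
# white noise standing in for a spectrum-compatible synthetic record.
m = 1.0e6                                      # mass in kg (assumed)
k = m * (2 * np.pi / 0.5)**2                   # stiffness for T = 0.5 s
c = 2 * 0.05 * np.sqrt(k * m)                  # 5 % of critical damping
rng = np.random.default_rng(0)
ag = 0.3 * 9.81 * rng.standard_normal(2000)    # ground acceleration, m/s^2
u, _, _ = newmark_linear(m, c, k, ag, dt=0.01)
print(abs(u).max())                            # peak relative displacement in m
```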

    The New IDS Corpus Analysis Platform: Challenges and Prospects

    The present article describes the first stage of the KorAP project, launched recently at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany. The aim of this project is to develop an innovative corpus analysis platform to tackle the increasing demands of modern linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse primary data and annotations in the petabyte range, while at the same time allowing an undistorted view of the primary linguistic data, and thus fully satisfying the demands of a scientific tool. An additional important aim of the project is to make corpus data as openly accessible as possible in light of unavoidable legal restrictions, for instance through support for distributed virtual corpora, user-defined annotations and adaptable user interfaces, as well as interfaces and sandboxes for user-supplied analysis applications. We discuss our motivation for undertaking this endeavour and the challenges that face it. Next, we outline our software implementation plan and describe development to date.

    EU DEMO Remote Maintenance System development during the Pre-Concept Design Phase

    During the EU DEMO Pre-Concept Design Phase, the remote maintenance team developed maintenance strategies and systems to meet the evolving plant maintenance requirements. These were constrained by the proposed tokamak architecture and the challenging environments but considered a range of port layouts and handling system designs. The design-driving requirements were to have short maintenance durations and to demonstrate power plant relevant technologies. Work concentrated on the in-vessel maintenance systems, where the design constraints are the most challenging and the potential impact on the plant design is highest. A robust blanket handling system design was not identified during the Pre-Concept Design Phase. Novel enabling technologies were identified and, where these were critical to the maintenance strategy and not being pursued elsewhere, proof-of-principle designs were developed and tested. Technology development focused on pipe joining systems such as laser bore cutting and welding, pipe alignment, and on the control systems for handling massive blankets. Maintenance studies were also conducted on the ex-vessel plant to identify the additional transport volumes required to support the plant layout. The strategic implications of using vessel casks, and of using containment cells with cell casks, were explored. This was motivated by the costs associated with the storage of casks, one of several ex-vessel systems that can drive the overall plant layout. This paper introduces the remote maintenance system designs, describes the main developments and achievements, and presents conclusions, lessons learned and recommendations for future work.

    Status Of The FAIR Synchrotron Projects SIS18 And SIS100

    A large fraction of the program to upgrade the existing heavy ion synchrotron SIS18 as injector for the FAIR synchrotron SIS100 has been successfully completed. With the achieved technical status, a major increase in the number of accelerated heavy ions could be reached. The now available performance especially demonstrates the feasibility of high intensity beams of medium charge state heavy ions with sufficient control of the dynamic vacuum and the connected charge exchange losses. Two further upgrade measures, the installation of additional magnetic alloy (MA) acceleration cavities and the exchange of the main dipole power converter, are presently being implemented. For the FAIR synchrotron SIS100, the procurement of all major components with long production times has been started. With the delivery and testing of several pre-series components, the phase of outstanding technical research and development could be completed and readiness for series production achieved.

    Tip- and laser-based 3D nanofabrication in extended macroscopic working areas

    The field of optical lithography is subject to intense research and has seen enormous improvement. However, the effort necessary for creating structures at the size of 20 nm and below is considerable using conventional technologies. This effort and the resulting financial requirements can only be tackled by a few global companies, and thus a paradigm change for the semiconductor industry is conceivable: custom design and solutions for specific applications will dominate future development (Fritze in: Panning EM, Liddle JA (eds) Novel patterning technologies. International society for optics and photonics. SPIE, Bellingham, 2021. https://doi.org/10.1117/12.2593229). For this reason, new aspects arise for future lithography, which is why enormous effort has been directed to the development of alternative fabrication technologies. Yet, the technologies emerging from this process, which are promising for coping with the current resolution and accuracy challenges, have only been demonstrated as proofs of concept on a lab scale of several square micrometers. Such a scale is not adequate for the requirements of modern lithography; therefore, there is a need for new and alternative cross-scale solutions to further advance the possibilities of unconventional nanotechnologies. Similar challenges arise because of the technical progress in various other fields realizing new and unique functionalities based on nanoscale effects, e.g., in nanophotonics, quantum computing, energy harvesting, and the life sciences. Experimental platforms for basic research in the field of scale-spanning nanomeasuring and nanofabrication are necessary for these tasks, and are available at the Technische Universität Ilmenau in the form of nanopositioning and nanomeasuring (NPM) machines. With this equipment, the limits of technical structurability are explored for high-performance tip-based and laser-based processes, enabling real 3D nanofabrication with the highest precision in an adequate working range of several thousand cubic millimeters.

    Mechanical design of the optical modules intended for IceCube-Gen2

    IceCube-Gen2 is an expansion of the IceCube neutrino observatory at the South Pole that aims to increase the sensitivity to high-energy neutrinos by an order of magnitude. To this end, about 10,000 new optical modules will be installed, instrumenting a fiducial volume of about 8 km³. Two newly developed optical module types increase IceCube's current per-module sensitivity by a factor of three by integrating 16 and 18 newly developed four-inch PMTs in specially designed 12.5-inch diameter pressure vessels. Both designs use conical silicone gel pads to optically couple the PMTs to the pressure vessel to increase photon collection efficiency. The outer portions of the gel pads are pre-cast onto each PMT prior to integration, while the interiors are filled and cast after the PMT assemblies are installed in the pressure vessel via a pushing mechanism. This paper presents the mechanical design as well as the performance of prototype modules at high pressure (70 MPa) and low temperature (−40 °C), characteristic of the environment inside the South Pole ice.

    The next generation neutrino telescope: IceCube-Gen2

    The IceCube Neutrino Observatory, a cubic-kilometer-scale neutrino detector at the geographic South Pole, has reached a number of milestones in the field of neutrino astrophysics: the discovery of a high-energy astrophysical neutrino flux, the temporal and directional correlation of neutrinos with a flaring blazar, and steady emission of neutrinos from the direction of a Seyfert II type active galaxy and from the Milky Way. The next generation neutrino telescope, IceCube-Gen2, currently under development, will consist of three essential components: an array of about 10,000 optical sensors, embedded within approximately 8 cubic kilometers of ice, for detecting neutrinos with energies of TeV and above, with a sensitivity five times greater than that of IceCube; a surface array with scintillation panels and radio antennas targeting air showers; and buried radio antennas distributed over an area of more than 400 square kilometers to significantly enhance the sensitivity to neutrino sources at energies beyond an EeV. This contribution describes the design and status of IceCube-Gen2 and discusses the expected sensitivity from simulations of the optical, surface, and radio components.

    Sensitivity of IceCube-Gen2 to measure flavor composition of Astrophysical neutrinos

    The observation of an astrophysical neutrino flux in IceCube and its capability to distinguish between the different neutrino flavors have allowed IceCube to constrain the flavor content of this flux. IceCube-Gen2 is the planned extension of the current IceCube detector, with an instrumented volume about 8 times larger than the current one. In this work, we study the sensitivity of IceCube-Gen2 to the astrophysical neutrino flavor composition and investigate its tau neutrino identification capabilities. We apply the IceCube analysis to a simulated IceCube-Gen2 dataset that mimics the High Energy Starting Event (HESE) classification. Reconstructions are performed using sensors that have 3 times higher quantum efficiency and an isotropic angular acceptance compared to the current IceCube optical modules. We present the projected sensitivity of 10 years of data for constraining the flavor ratio of the astrophysical neutrino flux at Earth with IceCube-Gen2.
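    For orientation, the following minimal Python sketch (not part of the analysis itself) computes the flavor ratio expected at Earth under averaged oscillations, P(alpha -> beta) = sum_i |U_ai|^2 |U_bi|^2. The mixing angles are approximate global best-fit values, and the CP phase is set to zero purely for simplicity; both are assumptions made here for illustration.

```python
import numpy as np

# Approximate best-fit mixing angles (normal ordering), CP phase = 0.
th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)
U = (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
     @ np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
     @ np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]))

# Averaged oscillation probabilities P(alpha -> beta).
P = (np.abs(U)**2) @ (np.abs(U)**2).T

source = np.array([1/3, 2/3, 0])   # pion-decay ratio (nu_e : nu_mu : nu_tau)
earth = source @ P
print(earth / earth.sum())         # ~ (0.33, 0.35, 0.32): close to 1:1:1 at Earth
```

    The near 1:1:1 ratio at Earth for a pion-decay source is the standard benchmark against which measured flavor constraints, such as those projected here for IceCube-Gen2, are compared.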